Results 1 - 20 of 57
1.
Sci Rep ; 14(1): 5658, 2024 03 07.
Article in English | MEDLINE | ID: mdl-38454072

ABSTRACT

In vivo cardiac diffusion tensor imaging (cDTI) is a promising Magnetic Resonance Imaging (MRI) technique for evaluating the microstructure of myocardial tissue in living hearts, providing insights into cardiac function and enabling the development of innovative therapeutic strategies. However, integrating cDTI into routine clinical practice remains challenging because of technical obstacles in the acquisition, such as low signal-to-noise ratio and prolonged scanning times. In this study, we investigated and implemented three different types of deep learning-based MRI reconstruction models for cDTI reconstruction. We evaluated the performance of these models in terms of reconstruction quality, diffusion tensor (DT) parameter fidelity, and computational cost. Our results indicate that the models discussed in this study can be applied for clinical use at acceleration factors (AF) of ×2 and ×4, with the D5C5 model showing superior reconstruction fidelity and the SwinMR model providing higher perceptual scores. All DT parameters at AF ×2, and most at AF ×4, show no statistical difference from the reference, and the quality of most DT parameter maps is visually acceptable. SwinMR is recommended as the optimal approach for reconstruction at AF ×2 and AF ×4. However, we believe that the models discussed in this study are not yet ready for clinical use at higher AFs. At AF ×8, the performance of all models remains limited, with only half of the DT parameters recovered to a level showing no statistical difference from the reference, and some DT parameter maps even provide wrong and misleading information.
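
For orientation, the sketch below computes two of the standard DT parameters assessed in studies like this one, mean diffusivity (MD) and fractional anisotropy (FA), from the eigenvalues of a 3 × 3 tensor. It illustrates the definitions only, not the study's evaluation code, and the example tensor values are invented.

```python
import numpy as np

def md_fa(tensor):
    """Mean diffusivity and fractional anisotropy of a 3x3 diffusion tensor."""
    ev = np.linalg.eigvalsh(tensor)  # eigenvalues, ascending
    md = ev.mean()
    fa = np.sqrt(1.5 * np.sum((ev - md) ** 2) / np.sum(ev ** 2))
    return md, fa

# Invented example: a mildly anisotropic tensor (units of mm^2/s)
D = np.diag([1.7e-3, 0.4e-3, 0.3e-3])
print(md_fa(D))
```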


Subject(s)
Deep Learning , Diffusion Tensor Imaging , Diffusion Tensor Imaging/methods , Algorithms , Magnetic Resonance Imaging , Magnetic Resonance Spectroscopy , Diffusion Magnetic Resonance Imaging/methods
2.
Comput Methods Programs Biomed ; 246: 108057, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38335865

ABSTRACT

BACKGROUND AND OBJECTIVE: 4D flow magnetic resonance imaging provides time-resolved blood flow velocity measurements, but suffers from limited spatio-temporal resolution and noise. In this study, we investigated the use of sinusoidal representation networks (SIRENs) to improve denoising and super-resolution of velocity fields measured by 4D flow MRI in the thoracic aorta. METHODS: Efficient training of SIRENs in 4D was achieved by sampling voxel coordinates and enforcing the no-slip condition at the vessel wall. A set of synthetic measurements was generated from computational fluid dynamics simulations, reproducing different noise levels. The influence of SIREN architecture was systematically investigated, and the performance of our method was compared to existing approaches for 4D flow denoising and super-resolution. RESULTS: Compared to existing techniques, a SIREN with 300 neurons per layer and 20 layers achieved lower errors (up to 50% lower vector normalized root mean square error, 42% lower magnitude normalized root mean square error, and 15% lower direction error) in velocity and wall shear stress fields. Applied to real 4D flow velocity measurements in a patient-specific aortic aneurysm, our method produced denoised and super-resolved velocity fields while maintaining accurate macroscopic flow measurements. CONCLUSIONS: This study demonstrates the feasibility of using SIRENs for complex blood flow velocity representation from clinical 4D flow, with quick execution and straightforward implementation.
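
A minimal PyTorch sketch of a SIREN with the width and depth reported above (300 neurons per layer, 20 layers) follows. The ω₀ frequency, the sine-specific weight initialisation, and the physics-informed training loop are omitted or left at illustrative values, so this sketches the architecture rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, as in SIREN."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class Siren(nn.Module):
    """Maps (t, x, y, z) coordinates to a 3-component velocity vector."""
    def __init__(self, hidden=300, depth=20):
        super().__init__()
        layers = [SineLayer(4, hidden)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 2)]
        self.net = nn.Sequential(*layers, nn.Linear(hidden, 3))

    def forward(self, coords):
        return self.net(coords)

# Sampled voxel coordinates (batch, 4) -> velocities (batch, 3)
model = Siren()
v = model(torch.rand(1024, 4))
```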


Subject(s)
Aorta, Thoracic , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Blood Flow Velocity/physiology , Aorta, Thoracic/diagnostic imaging , Aorta, Thoracic/physiology , Stress, Mechanical , Hydrodynamics , Imaging, Three-Dimensional/methods
3.
Eur Radiol Exp ; 7(1): 77, 2023 12 07.
Article in English | MEDLINE | ID: mdl-38057616

ABSTRACT

PURPOSE: To determine whether pelvic/ovarian and omental lesions of ovarian cancer can be reliably segmented on computed tomography (CT) using fully automated deep learning-based methods. METHODS: A deep learning model for the two most common disease sites of high-grade serous ovarian cancer lesions (pelvis/ovaries and omentum) was developed and compared against the well-established "no-new-Net" framework and unrevised trainee radiologist segmentations. A total of 451 CT scans collected from four different institutions were used for training (n = 276), evaluation (n = 104) and testing (n = 71) of the methods. Performance was evaluated using the Dice similarity coefficient (DSC) and compared using a Wilcoxon test. RESULTS: Our model outperformed no-new-Net for the pelvic/ovarian lesions in cross-validation, on the evaluation set and on the test set by a significant margin (p values of 4 × 10⁻⁷, 3 × 10⁻⁴ and 4 × 10⁻², respectively), and for the omental lesions on the evaluation set (p = 1 × 10⁻³). Our model did not perform significantly differently from a trainee radiologist in segmenting pelvic/ovarian lesions (p = 0.371). On an independent test set, the model achieved a DSC of 71 ± 20 (mean ± standard deviation) for pelvic/ovarian and 61 ± 24 for omental lesions. CONCLUSION: Automated ovarian cancer segmentation on CT scans using deep neural networks is feasible and achieves performance close to that of a trainee-level radiologist for pelvic/ovarian lesions. RELEVANCE STATEMENT: Automated segmentation of ovarian cancer may be used by clinicians for CT-based volumetric assessments and by researchers for building complex analysis pipelines. KEY POINTS: • The first automated approach for pelvic/ovarian and omental ovarian cancer lesion segmentation on CT images is presented. • Automated segmentation of ovarian cancer lesions can be comparable with the manual segmentation of trainee radiologists. • Careful hyperparameter tuning can provide models significantly outperforming strong state-of-the-art baselines.
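
To make the statistical comparison concrete, here is a hedged sketch of how per-scan DSC values might be computed and compared with a Wilcoxon signed-rank test; the DSC arrays are invented for illustration and are not the study's results.

```python
import numpy as np
from scipy.stats import wilcoxon

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Paired per-scan DSC values for two models (invented numbers)
dsc_model = np.array([0.74, 0.68, 0.81, 0.59, 0.77])
dsc_base = np.array([0.70, 0.61, 0.80, 0.50, 0.71])
stat, p = wilcoxon(dsc_model, dsc_base)
print(f"Wilcoxon p = {p:.3g}")
```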


Subject(s)
Deep Learning , Ovarian Cysts , Ovarian Neoplasms , Humans , Female , Ovarian Neoplasms/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed
4.
Article in English | MEDLINE | ID: mdl-37983143

ABSTRACT

Medical image segmentation is an important task in medical imaging, as it serves as the first step for clinical diagnosis and treatment planning. While major success has been reported using supervised deep learning techniques, they assume a large and well-representative labeled set. This is a strong assumption in the medical domain, where annotations are expensive, time-consuming, and subject to human bias. To address this problem, unsupervised segmentation techniques have been proposed in the literature. Yet, none of the existing unsupervised techniques reaches accuracies anywhere near the state of the art of supervised segmentation methods. In this work, we present a novel optimization model, framed in a new convolutional neural network (CNN)-based contrastive registration architecture, for unsupervised medical image segmentation, called CLMorph. The core idea of our approach is to exploit image-level registration and feature-level contrastive learning to perform registration-based segmentation. First, we propose an architecture that captures the image-to-image transformation mapping via registration for unsupervised medical image segmentation. Second, we embed a contrastive learning mechanism in the registration architecture to enhance the discriminative capacity of the network at the feature level. We show that our proposed CLMorph technique mitigates the major drawbacks of existing unsupervised techniques. We demonstrate, through numerical and visual experiments, that our technique substantially outperforms the current state-of-the-art unsupervised segmentation methods on two major medical image datasets.
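
CLMorph's exact loss is not given in the abstract; as a sketch of the feature-level contrastive ingredient it describes, a generic InfoNCE-style term over paired registration features might look like the following (PyTorch assumed; the pairing and temperature are illustrative assumptions).

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, tau=0.1):
    """Contrastive loss between aligned feature pairs.

    Matching rows of z_a and z_b are positives; all other rows in the
    batch act as negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau           # (N, N) similarity matrix
    targets = torch.arange(z_a.size(0))    # positives on the diagonal
    return F.cross_entropy(logits, targets)
```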

5.
Commun Med (Lond) ; 3(1): 139, 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37803172

ABSTRACT

BACKGROUND: Classifying samples in incomplete datasets is a common aim for machine learning practitioners, but is non-trivial. Missing data are found in most real-world datasets, and these missing values are typically imputed using established methods, followed by classification of the now complete samples. The focus of the machine learning researcher is then to optimise the classifier's performance. METHODS: We utilise three simulated and three real-world clinical datasets with different feature types and missingness patterns. Initially, we evaluate how the downstream classifier performance depends on the choice of classifier and imputation method. We employ ANOVA to quantitatively evaluate how the choices of missingness rate, imputation method, and classifier method influence performance. Additionally, we compare commonly used methods for assessing imputation quality and introduce a class of discrepancy scores based on the sliced Wasserstein distance. We also assess the stability of the imputations and the interpretability of models built on the imputed data. RESULTS: The performance of the classifier is most affected by the percentage of missingness in the test data, with a considerable performance decline observed as the test missingness rate increases. We also show that the commonly used measures for assessing imputation quality tend to lead to imputed data that poorly match the underlying data distribution, whereas our new class of discrepancy scores performs much better on this measure. Furthermore, we show that the interpretability of classifier models trained using poorly imputed data is compromised. CONCLUSIONS: It is imperative to consider the quality of the imputation when performing downstream classification, as the effects on the classifier can be considerable.
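
The new discrepancy scores are based on the sliced Wasserstein distance; a minimal NumPy sketch of that distance between an imputed and a reference dataset is given below. It assumes equal sample counts in the two arrays and is not the authors' exact scoring.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=128, seed=0):
    """Monte Carlo sliced Wasserstein-2 distance between two point clouds.

    X, Y: (n_samples, n_features) arrays with the same n_samples,
    e.g. imputed vs. reference data.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)     # random direction on the sphere
        x_proj = np.sort(X @ theta)        # 1D optimal transport = sorting
        y_proj = np.sort(Y @ theta)
        total += np.mean((x_proj - y_proj) ** 2)
    return np.sqrt(total / n_proj)
```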


Many artificial intelligence (AI) methods aim to classify samples of data into groups, e.g., patients with disease vs. those without. This often requires datasets to be complete, i.e., that all data have been collected for all samples. However, in clinical practice this is often not the case and some data can be missing. One solution is to 'complete' the dataset using a technique called imputation to replace the missing values. However, assessing how well an imputation method performs is challenging. In this work, we demonstrate why imputation quality matters, develop a new method for assessing it, and show that AI models built on poorly imputed data can give different results from those we would hope for. Our findings may improve the utility and quality of AI models in the clinic.

6.
Front Immunol ; 14: 1228812, 2023.
Article in English | MEDLINE | ID: mdl-37818359

ABSTRACT

Background: Pneumonitis is one of the most common adverse events induced by the use of immune checkpoint inhibitors (ICI), accounting for 20% of all ICI-associated deaths. Despite numerous efforts to identify risk factors and develop predictive models, there is no clinically deployed risk prediction model for patient risk stratification or for guiding subsequent monitoring. We believe this is due to systematically suboptimal approaches in study design and methodology across the literature. The nature and prevalence of these different methodological approaches have not been thoroughly examined in prior systematic reviews. Methods: The PubMed, medRxiv and bioRxiv databases were used to identify studies that aimed at risk factor discovery and/or risk prediction model development for ICI-induced pneumonitis (ICI pneumonitis). Studies were then analysed to identify common methodological pitfalls and their contribution to the risk of bias, assessed using the QUIPS and PROBAST tools. Results: There were 51 manuscripts eligible for the review, with Japan-based studies over-represented, making up nearly half (24/51) of all papers considered. Only 2/51 studies had a low risk of bias overall. Common bias-inducing practices included an unclear diagnostic method or potential misdiagnosis, lack of multiple-testing correction, the use of univariate analysis for selecting features for multivariable analysis, discretization of continuous variables, and inappropriate handling of missing values. Results from the risk model development studies were also likely to have been overoptimistic due to the lack of holdout sets. Conclusions: Studies with a low risk of bias in their methodology are lacking in the existing literature. High-quality risk factor identification and risk model development studies are urgently required by the community to give the best chance of progressing towards a clinically deployable risk prediction model. Recommendations and alternative approaches for reducing the risk of bias are also discussed to guide future studies.


Subject(s)
Pneumonia , Humans , Japan , Pneumonia/diagnosis , Pneumonia/chemically induced , Risk Factors , Systematic Reviews as Topic
7.
Diagnostics (Basel) ; 13(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37685352

ABSTRACT

Artificial intelligence (AI) methods applied to healthcare problems have shown enormous potential to alleviate the burden on health services worldwide and to improve the accuracy and reproducibility of predictions. In particular, developments in computer vision are creating a paradigm shift in the analysis of radiological images, where AI tools are already capable of automatically detecting and precisely delineating tumours. However, such tools are generally developed in technical departments that remain siloed from the clinical environments where their use would deliver real benefit. Significant effort is still needed to make these advancements available, first in academic clinical research and ultimately in the clinical setting. In this paper, we demonstrate a prototype pipeline, based entirely on open-source software and free of cost, that bridges this gap: it simplifies the integration of tools and models developed within the AI community into the clinical research setting and provides an accessible platform with visualisation applications that allow end-users such as radiologists to view and interact with the output of these AI tools.

8.
Herit Sci ; 11(1): 180, 2023.
Article in English | MEDLINE | ID: mdl-37638147

ABSTRACT

Medieval paper, a handmade product, is made with a mould which leaves an indelible imprint on the sheet of paper. This imprint includes chain lines, laid lines and watermarks, which are often visible on the sheet. Extracting these features allows the identification of the paper stock and gives information about the chronology, localisation and movement of manuscripts and people. Most computational work on feature extraction for paper analysis has so far focused on radiography or transmitted light images. While these imaging methods provide clear visualisation of the features of interest, they are expensive and time-consuming to acquire, and not feasible for smaller institutions. However, reflected light images of medieval paper manuscripts are abundant and possibly cheaper to acquire. In this paper, we propose algorithms to detect and extract the laid and chain lines from reflected light images. We tackle the main drawbacks of reflected light images, namely the low-contrast attenuation of chain and laid lines and intensity jumps due to noise and degradation, by employing the spectral total variation decomposition, and we develop methods for subsequent chain and laid line extraction. Our results clearly demonstrate the feasibility of using reflected light images in paper analysis. This work enables feature extraction for paper manuscripts that have otherwise not been analysed due to a lack of appropriate images. It also opens the door to paper stock identification at scale.
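
As a rough illustration of the multiscale idea (not the paper's spectral total variation transform), one can approximate a scale decomposition by differencing TV smoothings of increasing strength; the smoothing weights below are arbitrary.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_bands(image, weights=(0.02, 0.05, 0.1, 0.2)):
    """Crude TV-based scale decomposition by differencing TV smoothings.

    Thin quasi-periodic features such as laid lines concentrate in the
    fine-scale bands; the residual holds the coarse background.
    """
    levels = [image] + [denoise_tv_chambolle(image, weight=w) for w in weights]
    bands = [levels[i] - levels[i + 1] for i in range(len(levels) - 1)]
    return bands, levels[-1]  # detail bands + coarse residual
```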

9.
Sci Data ; 10(1): 493, 2023 07 27.
Article in English | MEDLINE | ID: mdl-37500661

ABSTRACT

The National COVID-19 Chest Imaging Database (NCCID) is a centralized UK database of thoracic imaging and corresponding clinical data. It is made available by the National Health Service Artificial Intelligence (NHS AI) Lab to support the development of machine learning tools focused on Coronavirus Disease 2019 (COVID-19). A bespoke cleaning pipeline for the NCCID, developed by NHSX, was introduced in 2021. We present an extension of the original cleaning pipeline for the clinical data of the database, adjusted to correct additional systematic inconsistencies in the raw data such as patient sex, oxygen levels and date values. The most important changes are discussed in this paper, while the code and further explanations are made publicly available on GitLab. The suggested cleaning allows users worldwide to work with more consistent data when developing machine learning tools, without requiring expert knowledge of the raw data. In addition, we highlight some of the challenges of working with clinical multi-center data and include recommendations for similar future initiatives.
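
A hedged sketch of the kind of corrections described is shown below. The column names (`sex`, `oxygen_saturation`, `swab_date`) are hypothetical; the real NCCID schema and the actual pipeline are documented in the GitLab repository.

```python
import pandas as pd

def clean_clinical(df: pd.DataFrame) -> pd.DataFrame:
    """Normalise a few systematically inconsistent clinical fields."""
    out = df.copy()
    # Harmonise free-text sex codes (hypothetical encodings)
    out["sex"] = (out["sex"].str.strip().str.upper()
                  .map({"M": "M", "MALE": "M", "F": "F", "FEMALE": "F"}))
    # Oxygen saturation recorded as a fraction by some centres, percent by others
    spo2 = pd.to_numeric(out["oxygen_saturation"], errors="coerce")
    out["oxygen_saturation"] = spo2.where(spo2 > 1.0, spo2 * 100)
    # Coerce mixed date formats; invalid entries become NaT for review
    out["swab_date"] = pd.to_datetime(out["swab_date"], errors="coerce", dayfirst=True)
    return out
```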


Subject(s)
COVID-19 , Thorax , Humans , Artificial Intelligence , Machine Learning , State Medicine , Radiography, Thoracic , Thorax/diagnostic imaging
10.
Phys Med Biol ; 68(15)2023 07 19.
Article in English | MEDLINE | ID: mdl-37192631

ABSTRACT

Krylov subspace methods are a powerful family of iterative solvers for linear systems of equations, commonly used for inverse problems due to their intrinsic regularization properties. Moreover, these methods are naturally suited to large-scale problems, as they only require matrix-vector products with the system matrix (and its adjoint) to compute approximate solutions, and they display very fast convergence. Although this class of methods has been widely researched and studied in the numerical linear algebra community, its use in applied medical physics and applied engineering is still very limited, e.g., in realistic large-scale computed tomography (CT) problems, and more specifically in cone-beam CT (CBCT). This work attempts to bridge that gap by providing a general framework for the most relevant Krylov subspace methods applied to 3D CT problems, including the most well-known Krylov solvers for non-square systems (CGLS, LSQR, LSMR), possibly in combination with Tikhonov regularization, and methods that incorporate total variation regularization. This is provided within an open-source framework, the tomographic iterative GPU-based reconstruction toolbox, with the aim of promoting accessibility and reproducibility of the results for the algorithms presented. Finally, numerical results on synthetic and real-world 3D CT applications (medical CBCT and µ-CT datasets) are provided to showcase and compare the different Krylov subspace methods presented in the paper, as well as their suitability for different kinds of problems.
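
For intuition, the simplest of the Krylov solvers named above, CGLS, can be sketched in a few lines of NumPy. The toolbox versions are GPU-based, matrix-free and more elaborate; this plain dense-matrix sketch uses early stopping as its only regularisation.

```python
import numpy as np

def cgls(A, b, n_iter=50):
    """Conjugate gradient on the normal equations for min ||Ax - b||_2.

    Only products with A and A.T are used, mirroring the matrix-free
    setting of large-scale CT; here A is a dense NumPy array for brevity.
    """
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```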


Subject(s)
Spiral Cone-Beam Computed Tomography , Reproducibility of Results , Tomography, X-Ray Computed , Algorithms , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Phantoms, Imaging
11.
IEEE Trans Med Imaging ; 42(11): 3167-3178, 2023 11.
Article in English | MEDLINE | ID: mdl-37022918

ABSTRACT

The isocitrate dehydrogenase (IDH) gene mutation is an essential biomarker for the diagnosis and prognosis of glioma. Integrating focal tumor image and geometric features with brain network features derived from MRI promises better prediction of glioma genotype. Convolutional neural networks show reasonable performance in predicting IDH mutation but cannot learn from non-Euclidean data, e.g., geometric and network data. In this study, we propose a multi-modal learning framework using three separate encoders to extract features of the focal tumor image, tumor geometrics and global brain networks. To mitigate the limited availability of diffusion MRI, we develop a self-supervised approach to generate brain networks from anatomical multi-sequence MRI. Moreover, to extract tumor-related features from the brain network, we design a hierarchical attention module for the brain network encoder. Further, we design a bi-level multi-modal contrastive loss to align the multi-modal features and tackle the domain gap between the focal tumor and the global brain. Finally, we propose a weighted population graph to integrate the multi-modal features for genotype prediction. Experimental results on the testing set show that the proposed model outperforms the baseline deep learning models. The ablation experiments validate the performance of different components of the framework. The visualized interpretation corresponds to clinical knowledge, with further validation. In conclusion, the proposed learning framework provides a novel approach for predicting the genotype of glioma.


Subject(s)
Brain Neoplasms , Glioma , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/genetics , Brain Neoplasms/pathology , Glioma/diagnostic imaging , Glioma/genetics , Glioma/pathology , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Genotype , Isocitrate Dehydrogenase/genetics
12.
J Digit Imaging ; 36(2): 739-752, 2023 04.
Article in English | MEDLINE | ID: mdl-36474089

ABSTRACT

The Dice similarity coefficient (DSC) is both a widely used metric and loss function for biomedical image segmentation due to its robustness to class imbalance. However, it is well known that the DSC loss is poorly calibrated, resulting in overconfident predictions that cannot be usefully interpreted in biomedical and clinical practice. Performance is often the only metric used to evaluate segmentations produced by deep neural networks, and calibration is often neglected. However, calibration is important for translation into biomedical and clinical practice, providing crucial contextual information to model predictions for interpretation by scientists and clinicians. In this study, we provide a simple yet effective extension of the DSC loss, named the DSC++ loss, that selectively modulates the penalty associated with overconfident, incorrect predictions. As a standalone loss function, the DSC++ loss achieves significantly improved calibration over the conventional DSC loss across six well-validated open-source biomedical imaging datasets, including both 2D binary and 3D multi-class segmentation tasks. Similarly, we observe significantly improved calibration when integrating the DSC++ loss into four DSC-based loss functions. Finally, we use softmax thresholding to illustrate that well-calibrated outputs enable tailoring of the recall-precision bias, an important post-processing technique for adapting model predictions to the biomedical or clinical task at hand. The DSC++ loss overcomes the major limitation of the DSC loss, providing a suitable loss function for training deep learning segmentation models for use in biomedical and clinical practice. Source code is available at https://github.com/mlyg/DicePlusPlus.
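
The exact formulation lives in the repository linked above; purely as a sketch, the plain soft Dice loss and an illustrative focal-style modulation of its false-positive/false-negative terms in the spirit of DSC++ (the exponent gamma and the precise form are assumptions here) could look like:

```python
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss for binary segmentation (probs, target in [0, 1])."""
    inter = (probs * target).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)

def dice_pp_style_loss(probs, target, gamma=2.0, eps=1e-6):
    """Illustrative DSC++-style variant: the exponent inflates the penalty
    on confident false positives/negatives (see the official repo for the
    exact formulation)."""
    tp = (probs * target).sum()
    fp = (((1 - target) * probs) ** gamma).sum()
    fn = ((target * (1 - probs)) ** gamma).sum()
    return 1 - (2 * tp + eps) / (2 * tp + fp + fn + eps)
```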


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods
13.
Brain ; 146(4): 1714-1727, 2023 04 19.
Article in English | MEDLINE | ID: mdl-36189936

ABSTRACT

Glioblastoma is characterized by diffuse infiltration into the surrounding tissue along white matter tracts. Identifying the invisible tumour invasion beyond the focal lesion promises more effective treatment but remains a significant challenge. It is increasingly accepted that glioblastoma can widely affect brain structure and function and further lead to reorganization of neural connectivity. Quantifying neural connectivity in glioblastoma may provide a valuable tool for identifying tumour invasion. Here we propose an approach to systematically identify tumour invasion by quantifying the structural connectome in glioblastoma patients. We first recruit two independent prospective glioblastoma cohorts: a discovery cohort of 117 patients and a validation cohort of 42 patients. Next, we use diffusion MRI of healthy subjects to construct tractography templates indicating white matter connection pathways between brain regions. We then construct fractional anisotropy skeletons from diffusion MRI, using an improved voxel projection approach based on tract-based spatial statistics, from which the strengths of white matter connections and brain regions are estimated. To quantify the disrupted connectome, we calculate the deviation of the connectome strengths of patients from those of age-matched healthy controls. We then categorize the disruption into regional disruptions on the basis of the location of the connectome relative to the focal lesion. We also characterize the topological properties of the patient connectome based on graph theory. Finally, we investigate the clinical, cognitive and prognostic significance of the connectome metrics using Pearson correlation tests, mediation tests and survival models. Our results show that the connectome disruptions in glioblastoma patients are widespread in the normal-appearing brain beyond the focal lesion, and are associated with lower preoperative performance (P < 0.001), impaired cognitive function (P < 0.001) and worse survival (overall survival: hazard ratio = 1.46, P = 0.049; progression-free survival: hazard ratio = 1.49, P = 0.019). Additionally, these distant disruptions mediate the effect on topological alterations of the connectome (mediation effect: clustering coefficient -0.017, P < 0.001; characteristic path length 0.17, P = 0.008). Further, the preserved connectome in the normal-appearing brain demonstrates evidence of connectivity reorganization, where increased neural connectivity is associated with better overall survival (log-rank P = 0.005). In conclusion, our connectome approach can reveal and quantify glioblastoma invasion that is distant from the focal lesion and invisible on conventional MRI. The structural disruptions in the normal-appearing brain were associated with topological alteration of the brain and could indicate treatment targets. Our approach promises to aid more accurate patient stratification and more precise treatment planning.
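
One natural way to formalise "deviation of patient connectome strengths from age-matched healthy controls" is a per-edge z-score; the sketch below is that formalisation for illustration, not necessarily the paper's exact metric.

```python
import numpy as np

def connectome_deviation(patient, controls):
    """Per-edge z-scores of a patient connectome against healthy controls.

    patient:  (n_edges,) connection strengths for one patient.
    controls: (n_controls, n_edges) strengths from matched controls.
    Strongly negative scores flag candidate disrupted connections.
    """
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0, ddof=1)
    return (patient - mu) / np.where(sd > 0, sd, np.inf)
```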


Subject(s)
Connectome , Glioblastoma , White Matter , Humans , Connectome/methods , Glioblastoma/diagnostic imaging , Glioblastoma/pathology , Diffusion Tensor Imaging/methods , Prospective Studies , Brain/pathology , White Matter/pathology
14.
Article in English | MEDLINE | ID: mdl-38550952

ABSTRACT

Renal cancer is responsible for over 100,000 deaths each year and is principally discovered in computed tomography (CT) scans of the abdomen. CT screening would likely increase the rate of early renal cancer detection and improve overall survival rates, but it is expected to have a prohibitively high financial cost. Given recent advances in artificial intelligence (AI), it may be possible to reduce the cost of CT analysis and enable CT screening by automating the radiological tasks that constitute the early renal cancer detection pipeline. This review seeks to facilitate further interdisciplinary research in early renal cancer detection by summarising our current knowledge across AI, radiology, and oncology and suggesting useful directions for future novel work. The review first discusses existing approaches in automated renal cancer diagnosis, and methods across broader AI research, to summarise the current state of AI cancer analysis. It then matches these methods to the unique constraints of early renal cancer detection and proposes promising directions for future research that may enable AI-based early renal cancer detection via CT screening. The primary targets of this review are clinicians with an interest in AI and data scientists with an interest in the early detection of cancer.

15.
J Math Imaging Vis ; 64(9): 968-992, 2022.
Article in English | MEDLINE | ID: mdl-36329880

ABSTRACT

We study the problem of deconvolution for light-sheet microscopy, where the data is corrupted by spatially varying blur and a combination of Poisson and Gaussian noise. The spatial variation of the point spread function (PSF) of a light-sheet microscope is determined by the interaction between the excitation sheet and the detection objective PSF. We introduce a model of the image formation process that incorporates this interaction, and we formulate a variational model that accounts for the combination of Poisson and Gaussian noise through a data fidelity term consisting of the infimal convolution of the single-noise fidelities, first introduced in L. Calatroni et al. (SIAM J Imaging Sci 10(3):1196-1233, 2017). We establish convergence rates and a discrepancy principle for the infimal convolution fidelity, and the inverse problem is solved by applying the primal-dual hybrid gradient (PDHG) algorithm in a novel way. Numerical experiments performed on simulated and real data show superior reconstruction results in comparison with other methods.
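
In symbols, an infimal-convolution fidelity for mixed Gaussian-Poisson noise has the general shape below, where A is the forward operator, f the measured data, and v an additive split of the residual; the weights and the exact convention here are assumptions following the cited Calatroni et al. construction.

```latex
% Gaussian part penalises the split v; Poisson part is a KL-type term
% comparing f with Au + v (weights lambda_1, lambda_2 > 0 assumed)
\mathcal{D}(Au, f) \;=\; \inf_{v}\;
  \frac{\lambda_1}{2}\,\lVert v \rVert_2^2
  \;+\; \lambda_2 \int_\Omega \bigl( Au + v - f \log (Au + v) \bigr)\, \mathrm{d}x
```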

16.
Med Image Anal ; 82: 102618, 2022 11.
Article in English | MEDLINE | ID: mdl-36183607

ABSTRACT

The task of classifying mammograms is very challenging because the lesion is usually small in the high-resolution image. The current state-of-the-art approaches for medical image classification rely on the de facto method for convolutional neural networks: fine-tuning. However, there are fundamental differences between natural images and medical images which, based on existing evidence in the literature, limit the overall performance gain of approaches designed around fine-tuning alone. In this paper, we propose to go beyond fine-tuning by introducing a novel framework called MorphHR, which embodies a new transfer learning scheme. The idea behind the proposed framework is to integrate function-preserving transformations, for any continuous non-linear activation neurons, to internally regularise the network and thereby improve mammogram classification. The proposed solution offers two major advantages over existing techniques. First, unlike fine-tuning, the proposed approach allows for modifying not only the last few layers of a deep ConvNet but also several of the first ones, making it possible to design the network front-end for learning domain-specific features. Second, the proposed scheme is scalable in hardware terms, so one can fit high-resolution images in standard GPU memory; we show that using high-resolution images prevents the loss of relevant information. We demonstrate, through numerical and visual experiments, that the proposed approach yields a significant improvement in classification performance over state-of-the-art techniques and is indeed on a par with radiology experts. Moreover, for generalisation purposes, we show the effectiveness of the proposed learning scheme on another large dataset, ChestX-ray14, surpassing current state-of-the-art techniques.
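
For intuition about function-preserving transformations, the sketch below shows the classic Net2Net-style widening move: hidden units are duplicated and the outgoing weights rescaled so the widened network computes exactly the same function. MorphHR's transformations generalise beyond this, so the sketch is illustrative only.

```python
import numpy as np

def widen_layer(W1, b1, W2, new_width, seed=0):
    """Function-preserving widening of one hidden layer.

    W1: (h, d) incoming weights, b1: (h,) bias, W2: (o, h) outgoing
    weights. Works for any elementwise activation between the layers.
    """
    rng = np.random.default_rng(seed)
    h = W1.shape[0]
    # Keep all original units, then replicate randomly chosen ones
    mapping = np.concatenate([np.arange(h),
                              rng.integers(0, h, new_width - h)])
    W1_new, b1_new = W1[mapping], b1[mapping]
    # Divide outgoing weights by each unit's replica count so sums match
    counts = np.bincount(mapping, minlength=h)
    W2_new = W2[:, mapping] / counts[mapping]
    return W1_new, b1_new, W2_new
```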


Subject(s)
Mammography , Neural Networks, Computer , Humans , Mammography/methods
17.
PLoS One ; 17(9): e0272963, 2022.
Article in English | MEDLINE | ID: mdl-36048759

ABSTRACT

Breast cancer remains the most prevalent malignancy in women in many countries around the world, calling for better imaging technologies to improve screening and diagnosis. Grating interferometry (GI)-based phase contrast X-ray CT is a promising technique which could make the transition to clinical practice and improve breast cancer diagnosis by combining the high three-dimensional resolution of conventional CT with higher soft-tissue contrast. Unfortunately, obtaining high-quality images is challenging: grating fabrication defects and photon starvation lead to high noise amplitudes in the measured data, and the highly ill-conditioned differential nature of the GI-CT forward operator renders the inversion from corrupted data even more cumbersome. In this paper, we propose a novel regularized iterative reconstruction algorithm with an improved tomographic operator and a powerful data-driven regularizer to tackle this challenging inverse problem. Our algorithm combines the L-BFGS optimization scheme with a data-driven prior parameterized by a deep neural network. Importantly, we propose a novel regularization strategy to ensure that the trained network is non-expansive, which is critical for the convergence and stability analysis we provide. We empirically show that the proposed method achieves high-quality images, both on simulated data and on real measurements.
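
The abstract does not spell out the paper's non-expansiveness regularisation; a standard alternative mechanism for this kind of Lipschitz control, shown purely for intuition, is spectral normalisation of every layer (PyTorch assumed; 1-Lipschitz activations keep the composition non-expansive).

```python
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

class NonExpansiveDenoiser(nn.Module):
    """Small CNN prior whose layers are spectrally normalised, so each
    layer has operator norm about 1 and the whole map is non-expansive."""
    def __init__(self, channels=1, width=64):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(channels, width, 3, padding=1)),
            nn.ReLU(),
            spectral_norm(nn.Conv2d(width, width, 3, padding=1)),
            nn.ReLU(),
            spectral_norm(nn.Conv2d(width, channels, 3, padding=1)),
        )

    def forward(self, x):
        return self.net(x)
```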


Subject(s)
Breast Neoplasms , Tomography, X-Ray Computed , Algorithms , Breast Neoplasms/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Tomography , Tomography, X-Ray Computed/methods
18.
Article in English | MEDLINE | ID: mdl-36136921

ABSTRACT

Semisupervised learning (SSL) has received a lot of recent attention as it alleviates the need for large amounts of labeled data, which can be expensive to obtain, require expert knowledge, and be time-consuming to collect. Recent developments in deep semisupervised classification have reached unprecedented performance, and the gap between supervised and semisupervised learning is ever-decreasing. This improvement in performance has been based on the inclusion of numerous technical tricks, strong augmentation techniques, and costly optimization schemes with multi-term loss functions. We propose a new framework, LaplaceNet, for deep semisupervised classification that has a greatly reduced model complexity. We utilize a hybrid approach in which pseudolabels are produced by minimizing the Laplacian energy on a graph; these pseudolabels are then used to iteratively train a neural-network backbone. Our model outperforms state-of-the-art methods for deep semisupervised classification over several benchmark datasets. Furthermore, we consider the application of strong augmentations to neural networks theoretically and justify the use of a multi-sampling approach for SSL. We demonstrate, through rigorous experimentation, that a multi-sampling augmentation approach improves generalization and reduces the sensitivity of the network to augmentation.
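
As a sketch of the graph step, the classic harmonic solution to Laplacian-energy minimisation yields pseudolabels for the unlabeled nodes as below; LaplaceNet's exact formulation and scale may differ, and the dense solve is for illustration only.

```python
import numpy as np

def laplacian_pseudolabels(W, labels, labeled_idx, n_classes):
    """Pseudolabels by minimising the Laplacian (Dirichlet) energy.

    W: (n, n) symmetric affinity matrix; labeled_idx: indices of labeled
    nodes; labels: their integer classes. Solves L_uu F_u = W_ul Y_l.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                     # graph Laplacian
    u = np.setdiff1d(np.arange(n), labeled_idx)
    Y = np.zeros((len(labeled_idx), n_classes))
    Y[np.arange(len(labeled_idx)), labels] = 1.0       # one-hot known labels
    F_u = np.linalg.solve(L[np.ix_(u, u)], W[np.ix_(u, labeled_idx)] @ Y)
    return u, F_u.argmax(axis=1)                       # node idx, pseudolabel
```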

19.
Sci Rep ; 12(1): 10001, 2022 06 15.
Article in English | MEDLINE | ID: mdl-35705591

ABSTRACT

Cell Painting is a high-content image-based assay applied in drug discovery to predict bioactivity, assess toxicity and understand mechanisms of action of chemical and genetic perturbations. We investigate label-free Cell Painting by predicting the five fluorescent Cell Painting channels from brightfield input. We train and validate two deep learning models with a dataset representing 17 batches, and we evaluate on batches treated with compounds from a phenotypic set. The mean Pearson correlation coefficient of the predicted images across all channels is 0.84. Without incorporating features into the model training, we achieved a mean correlation of 0.45 with ground-truth features extracted using a segmentation-based feature extraction pipeline. Additionally, we identified 30 features which correlated at greater than 0.8 with the ground truth. Toxicity analysis on the label-free Cell Painting resulted in a sensitivity of 62.5% and a specificity of 99.3% on images from unseen batches. We provide a breakdown of the feature profiles by channel and feature type to understand the potential and limitations of label-free morphological profiling. We demonstrate that label-free Cell Painting has the potential to be used for downstream analyses and could allow imaging channels to be repurposed for other, non-generic fluorescent stains of more targeted biological interest.


Subject(s)
Biological Assay , Drug Discovery , Biological Assay/methods , Image Processing, Computer-Assisted/methods
20.
Med Phys ; 49(6): 3729-3748, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35257395

ABSTRACT

PURPOSE: Breast cancer is the most common malignancy in women. Unfortunately, current breast imaging techniques all suffer from certain limitations: they are either not fully three-dimensional, or have insufficient resolution or low soft-tissue contrast. Grating interferometry breast computed tomography (GI-BCT) is a promising X-ray phase contrast modality that could overcome these limitations by offering high soft-tissue contrast and excellent three-dimensional resolution. To enable the transition of this technology to clinical practice, dedicated data-processing algorithms must be developed in order to effectively retrieve the signals of interest from the measured raw data. METHODS: This article proposes a novel denoising algorithm that can cope with the high noise amplitudes and heteroscedasticity which arise in GI-BCT when operated in a low-dose regime, in order to effectively regularize the ill-conditioned GI-BCT inverse problem. We present a data-driven algorithm called INSIDEnet, which combines different ideas such as multiscale image processing, transform-domain filtering, transform learning, and explicit orthogonality to build an Interpretable NonexpanSIve Data-Efficient network (INSIDEnet). RESULTS: We apply the method to simulated breast phantom datasets and to real data acquired on a GI-BCT prototype and show that the proposed algorithm outperforms traditional state-of-the-art filters and is competitive with deep neural networks. The strong inductive bias given by the proposed model's architecture allows the algorithm to be trained reliably with very limited data while providing high model interpretability, thus offering a great advantage over classical convolutional neural networks (CNNs). CONCLUSIONS: The proposed INSIDEnet is highly data-efficient, interpretable, and outperforms state-of-the-art CNNs when trained on very limited training data. We expect the proposed method to become an important tool as part of a dedicated plug-and-play GI-BCT reconstruction framework, needed to translate this promising technology to the clinic.


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Algorithms , Female , Humans , Image Processing, Computer-Assisted/methods , Interferometry , Phantoms, Imaging , Signal-To-Noise Ratio , Thorax , Tomography, X-Ray Computed/methods